Introduction
I try to be a good Netizen when it comes to
IPv6,
which has been the "next big thing" for the Internet since it was announced in
1998. I picked my home ISP in part because it supports native IPv6. I host
websites and email on a Hetzner dedicated server, running Debian. The server
itself has an IPv4 and IPv6 address, and the stuff I host is accessible via
both protocols. So far, so good.
As the Internet gets more inter-dependent, we're seeing more services which
don't just talk to users via browsers or mail clients. We're seeing
interconnected applications like OAuth or the Fediverse, where our web services
are talking to each other. And behind the scenes, the web services I host couldn't talk to the IPv6 Internet. This wasn't causing me any practical problems, but it was bugging me, which is my favourite kind of problem to solve.
Overview
Most of the web services I host run in containers, managed via Docker. The containers live behind Traefik, which listens on the server's public addresses (both IPv4 and IPv6), handles the security certificates required for HTTPS, and directs incoming requests to the right container for each web service.
Each web service has its own network (so they can't talk to each other, to
avoid lateral movement attacks if one service gets hacked). And those internal
Docker networks only use IPv4 by default. So my task was to enable IPv6 on the
Docker networks, which then means that containers will get IPv6 addresses, and
can talk to the IPv6 Internet.
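As a rough sketch of that layout (the network, image, and container names here, including traefik, are invented for illustration rather than being my real setup), each service gets its own network and only the Traefik container joins all of them:
# One isolated network per service; only Traefik is connected to every network
docker network create myapp_net
docker run -d --name myapp --network myapp_net some/app-image
docker network connect myapp_net traefik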
But I can't use just any IPv6 addresses! The Internet works by routing
connections - each router on the network acts like a signpost, directing
connections towards their ultimate destination. And so I can only use IP
addresses which are going to be routed back to my server.
Implementation
Hetzner provide me with a global IPv4 /27 (32 addresses, roughly 30 of them usable) and a global IPv6 /64 (18 million trillion addresses; yes, the number of addresses in IPv6 is ludicrous). As mentioned above, I am currently using just one of each. So I've got a lot of IPv6 addresses that I can use for this purpose!
My plan was to assign about 0.4% of those 18 million trillion addresses (a
/72) to Docker, and split that into 256 possible networks, each of which
can have around 280 thousand billion addresses (a /80). For end-user networks
(like the one in my house) you don't tend to use networks smaller than a /64
because it breaks the standard way that devices get IPv6 addresses, known as
SLAAC. But this isn't an end-user network, and all the IPv6 addresses are being
handed out by Docker, so it doesn't matter.
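If you want to sanity-check that arithmetic, bash can do it (2^64 itself overflows a 64-bit integer, so the size of the whole /64 stays as prose):
# Subnetting arithmetic for the plan above
echo $(( 1 << (72 - 64) ))    # /72s inside the /64: 256, so one /72 is ~0.4% of it
echo $(( 1 << (80 - 72) ))    # /80 networks inside that /72: 256
echo $(( 1 << (128 - 80) ))   # addresses in each /80: 281474976710656 (~280 thousand billion)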
I'm going to refer to the /64 routed to my server as ${PREFIX}:: for ease
of reading. By Hetzner's convention, my server's public IPv6 address is at the
bottom of the network, at ${PREFIX}::2. I chose to use the top /72 within
that, ${PREFIX}:ff00::/72, for Docker. So I need to tell Docker to use this,
and to use /80 networks by default inside it.
I do this by adding the following to /etc/docker/daemon.json:
{
  "ipv6": true,
  "default-address-pools": [
    { "base": "172.16.0.0/12", "size": 16 },
    { "base": "${PREFIX}:ff00::/72", "size": 80 }
  ]
}
This file didn't exist before I created it. Setting default-address-pools replaces Docker's built-in defaults rather than adding to them, so I had to include an IPv4 pool alongside the new IPv6 one. Then it's just a matter of restarting the Docker daemon and testing it out!
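On a Debian host running systemd, that restart looks something like this (the dockerd --validate line is an optional sanity check, only available on newer Docker releases):
# Optionally check the new config parses before restarting
dockerd --validate --config-file /etc/docker/daemon.json
# Restart the daemon so it picks up the new address pools
sudo systemctl restart docker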
Testing
On restarting the daemon, I had a look at my existing Docker networks and containers. Most of them hadn't changed at all - IPv6 isn't enabled by default on networks. However, the legacy default bridge network, where some utility containers like Watchtower run, had picked it up: my Watchtower container now had an IPv6 address!
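The checks themselves were just inspect commands along these lines (substitute whatever your container is actually called for watchtower):
# Which subnets does the default bridge network now have?
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
# What addresses does the Watchtower container have?
docker inspect watchtower --format '{{.NetworkSettings.IPAddress}} {{.NetworkSettings.GlobalIPv6Address}}'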
Reachability
The first thing I wanted to test was whether these global addresses meant that any containers were reachable from the Internet. I mostly put containers on separate networks to stop them reaching each other, so containers being directly reachable from the Internet would be a security risk I hadn't considered.
The bridge network had taken the first network within the permitted pool, ${PREFIX}:ff00::/80, and the Watchtower container was now at ${PREFIX}:ff00::2. I tried to ping this container from my home laptop, and was happy that it failed.
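That test was nothing fancier than a ping to the container's address from the laptop, which should time out (this is Linux's ping with its -6 flag; some other systems spell it ping6):
# From a machine elsewhere on the IPv6 Internet; expect 100% packet loss
ping -6 -c 3 "${PREFIX}:ff00::2"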
Next up, I tried creating an IPv6-only network via Docker and adding whoami to it. This grabbed the second network from the pool, ${PREFIX}:ff01::/80, and put the first container on the network at ${PREFIX}:ff01::2.
docker network create --ipv4=false --ipv6=true ipv6_test
docker container run -d --network ipv6_test --name whoami traefik/whoami
From the host, I could curl http://[${PREFIX}:ff01::2]/ (the square brackets are how IPv6 literals are written in URLs) and get a response from the whoami application showing me its IP addresses, with no IPv4 address except localhost.
I tried the same curl from my home laptop and was again happy that it didn't
work.
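For the record, the invocations looked something like this (adding -g stops curl's URL globbing from misreading the brackets):
# On the server itself: returns the whoami output
curl -g "http://[${PREFIX}:ff01::2]/"
# From the laptop at home: should fail to connect
curl -g --connect-timeout 5 "http://[${PREFIX}:ff01::2]/"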
Outbound Connections
One last thing remained to test - my containers weren't reachable from the
Internet, but could they reach the IPv6 Internet themselves? This was, after
all, the purpose of the exercise.
I found a container which bundled network tools, and ran it on my ipv6_test
network. I could reach IPv6-only sites with curl, and traceroute across the
IPv6 Internet back to my house.
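I won't swear to exactly which image that was, but as a sketch, a general-purpose network-tools image such as nicolaka/netshoot reproduces the test:
# Fetch an IPv6-only site from inside the isolated network (curl -6 forces IPv6)
docker run --rm --network ipv6_test nicolaka/netshoot curl -6 -sS -o /dev/null -w '%{http_code}\n' https://ipv6.google.com/
# And ping an IPv6 host out on the Internet
docker run --rm --network ipv6_test nicolaka/netshoot ping -6 -c 3 ipv6.google.com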
Summary
IPv6 mostly Just Works for me at home; I connect my laptop to the WiFi the same
way I always did, I configure nothing, and I can reach services over IPv6. This
is exactly the user experience we want, and it should be the bare minimum.
Unfortunately I had to go choose a particular ISP for it, as some of the major
UK players like Virgin still don't routinely offer this after 28 years of IPv6.
I'm prepared to put a bit of effort into my self-hosted services, as a learning
exercise, to get them working nicely. The hardest part was thinking through
what I wanted to do.
I still want to get a bit more comfortable with how Docker configures
firewalling to be 100% certain that my containers aren't reachable from the
Internet before I add IPv6 support to my existing web services, but I'm pretty
happy with how this has gone!
I hope this guide is useful to other people in the same boat as me.